Plenoptic camera for mobile devices
Patent abstract:
Plenoptic camera for mobile devices, comprising a main lens (102), a microlens array (104), an image sensor (108), and a first reflective element (510) configured to reflect the light rays (601a) captured by the plenoptic camera before they arrive at the image sensor (108), in order to fold the optical path of the light captured by the camera before it impinges on the image sensor. Additional reflective elements (512) may also be used to further fold the light path inside the camera. The reflective elements (510, 512) can be prisms, mirrors or reflective surfaces (804b; 814b) of three-sided optical elements (802) having two refractive surfaces (804a, 804c) that form a lens element of the main lens (102). By equipping mobile devices with this plenoptic camera, the focal length can be greatly increased while keeping the thickness of the mobile device under current constraints.
Publication number: ES2854573A1
Application number: ES202090040
Filing date: 2018-06-14
Publication date: 2021-09-21
Inventors: Leticia Carrion; Jorge Blasco; Francisco Clemente; Francisco Alventosa; Arnau Calatayud; Carles Montoliu; Adolfo Martinez; Iván Perino
Applicant: Photonic Sensors and Algorithms SL
Main IPC class:
Patent description:
[0003] Field of the invention [0004] The present invention is within the field of microlens arrays, optical systems incorporating microlens arrays, light field imaging, light field cameras, and plenoptic cameras. [0005] State of the art [0006] Plenoptic cameras are imaging devices capable of capturing not only spatial information but also angular information from a scene. This captured information is known as the light field, which can be represented as a four-dimensional tuple LF(px, py, lx, ly), where px and py select the direction of arrival of the rays to the sensor and lx, ly are the spatial position of these rays. A plenoptic camera is usually made up of an array of microlenses placed in front of a sensor. [0007] This system is equivalent to capturing the scene from various points of view (the so-called plenoptic views); therefore, a plenoptic camera can be considered a multiview system. Another system that can capture a light field can be formed by an array of several cameras. Consequently, information about the depths of the different objects in the scene (that is, the distance between each object and the camera) is captured implicitly in the light field. This capability of plenoptic cameras enables a wide number of applications in depth mapping and 3D imaging. [0008] In 2012, Lytro introduced the first commercially available single-array plenoptic camera on the international market and, three years later, the Lytro Illum camera. Since then, no other light field cameras have hit the consumer electronics market. The first Lytro plenoptic camera had mechanical dimensions along the optical axis of 12 cm, and the Lytro Illum camera had an objective lens (like DSLR cameras) of more than 12 cm and a total size of approximately 20 cm. The Lytro Illum camera's improved optics, with a dedicated five-lens variable focal length objective, allowed the Illum camera to outperform the first Lytro camera.
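As a minimal sketch of the four-dimensional light field LF(px, py, lx, ly) described above, the following snippet (array sizes are hypothetical, not taken from the patent) stores a light field as nested lists and extracts one plenoptic view by fixing the same pixel (px, py) under every microlens:

```python
# Hypothetical sizes: 3x3 angular samples (px, py) under 4x4 microlenses (lx, ly).
PX, PY, LX, LY = 3, 3, 4, 4

# LF(px, py, lx, ly): nested lists standing in for the captured light field samples.
LF = [[[[0.0 for _ in range(LY)] for _ in range(LX)]
       for _ in range(PY)] for _ in range(PX)]

def plenoptic_view(lf, px, py):
    """2D image built by taking the same pixel (px, py) under every microlens (lx, ly)."""
    return [[lf[px][py][lx][ly] for ly in range(LY)] for lx in range(LX)]

view = plenoptic_view(LF, 1, 1)  # the central view of a 3x3 angular grid
print(len(view), len(view[0]))   # 4 4: one 2D image per chosen direction of arrival
```

Each choice of (px, py) yields one such 2D view, which is why the plenoptic camera behaves as a multiview system.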
Following these two forays into consumer cameras, Lytro moved into a very different market: the film market, producing extremely large cameras in which the length of the optical system can be dozens of centimeters, with 755-megapixel sensors and extremely heavy solutions. These are not handheld cameras, but professional film cameras that must be held by tripods or heavy mechanical structures. [0009] In addition to Lytro, Raytrix has also launched several products based on light field technology, geared towards industrial applications. These are large cameras with large objective lenses that ensure good depth estimation performance. [0010] In conclusion, light field cameras have shown good performance in terms of 3D imaging and depth detection. However, plenoptic cameras have never been brought to the mobile device market because they are really difficult to miniaturize. US patent 9,647,150-B2 discloses a method of manufacturing miniaturized plenoptic sensors. However, as already explained, the smallest plenoptic camera released to the consumer electronics market is the 12 cm Lytro camera. [0011] Performance in plenoptic cameras depends on key optical design factors, such as focal length and f-number, where a large focal length or a small f-number can dramatically improve camera performance. Although it is easy to find small f-numbers in smartphone lenses, it is very difficult to design and manufacture large focal lengths that meet the design rules of the smartphone market, because the very small thicknesses of the modules place difficult constraints on the MTTL (Mechanical Total Track Length) of the cameras. [0012] In addition, the current smartphone market tends to reduce the dimensions of mini cameras more and more with each generation, increasing the difficulty of designing large focal lengths.
Therefore, there is a need to bring light field technology to the smartphone market with a significant increase in focal length, while at the same time complying with the mechanical restrictions on smartphone size. [0013] Definitions: [0014] - Plenoptic camera: a device capable of capturing not only the spatial position but also the direction of arrival of the incident light rays. [0015] - Multiview system: a system capable of capturing a scene from various points of view. A plenoptic camera can be considered a multiview system. Stereoscopic and multistereoscopic cameras are also considered multiview systems. [0016] - Light field: four-dimensional structure LF(px, py, lx, ly) that contains the information of the light captured by the pixels (px, py) under the microlenses (lx, ly) in a plenoptic camera. [0017] - Depth: distance between the plane of an object point in a scene and the main plane of the camera, both planes being perpendicular to the optical axis. [0018] - Plenoptic view: two-dimensional image formed by taking a subset of the light field structure by choosing a certain value (px, py), always the same (px, py) for each of the microlenses (lx, ly). [0019] - Microlens array (MLA): matrix of small lenses (microlenses). [0020] - Depth map: two-dimensional image in which the depth values calculated from the object world are added as an additional value to each pixel (x, y) of the two-dimensional image, composing depth = f(x, y). - Disparity: distance between two (or more) projections of an object point in a camera system. [0021] - Baseline: difference between the positions of two (or more) cameras in a stereoscopic (or multistereoscopic) configuration. [0022] - Folded optics: optical system in which the optical path is bent through reflective elements such as prisms or mirrors, so that the thickness of the system is changed to achieve a certain thickness specification.
[0023] - OTTL (Optical Total Track Length): length of the optical path followed by the light from the point where it enters the optical system to the point where it reaches the sensor. [0024] - MTTL (Mechanical Total Track Length): total length of the device required to include the mechanical parts of the optical system. [0025] - Prism or mirror: refers to the optical component used to reflect light at a certain angle, folding the optical path of the light. [0026] Summary of the invention [0027] In order to bring light field technology to the smartphone market, this document presents a new concept of plenoptic camera, where a prism, mirror or other reflective element is used to fold the optical path of the lens, making it possible to design lenses with long focal lengths without increasing the lens thickness. [0028] A first aspect of the present invention refers to a plenoptic camera for mobile devices that comprises a main lens, a microlens array, an image sensor and a first reflective element (preferably a prism or a mirror) configured to reflect the light rays captured by the plenoptic camera before they reach the image sensor, in order to fold the optical path of the light captured by the camera before it strikes the image sensor. [0029] In one embodiment, the first reflective element is arranged to receive the captured light rays before they reach the main lens. In another embodiment, the first reflective element is arranged to receive the light rays already focused by the main lens. When only one reflective element is used, the optical axis of the main lens is preferably parallel to the surface of the image sensor (in this way, the optical path is folded 90 degrees or any other arbitrary angle). [0030] In another embodiment, the plenoptic camera comprises one or more additional reflective elements (preferably prisms or mirrors) configured to reflect the light rays reflected by the first reflective element before they reach the image sensor.
Therefore, the additional reflective elements are sandwiched between the first reflective element and the image sensor, in order to further fold the optical path and help reduce the physical dimensions of the plenoptic camera along a given axis. [0031] The main lens can comprise a plurality of lens elements. In particular, the main lens may comprise a first set and a second set of lens elements, each set comprising one or more concentric lens elements. The physical arrangement of both sets of lens elements can be such that the optical axis of the first set of lens elements is perpendicular to the optical axis of the second set of lens elements and parallel to the image sensor. In one embodiment, the first reflective element is disposed between the first and second sets of lens elements. In another embodiment, the first reflective element is arranged to receive the captured light rays before they reach the main lens, and the plenoptic camera comprises a second reflective element arranged between the first set and the second set of lens elements, wherein the second reflective element is configured to reflect the light rays reflected by the first reflective element and already focused by the first set of lens elements, before they reach the image sensor. [0032] Another aspect of the present invention refers to a camera module for mobile devices comprising the plenoptic camera described above. This camera module can be, for example, a separate part directly integrated into a smartphone (for example, inserted into the smartphone or connected to the back cover of the smartphone) by a coupling means and electrical contacts. In the camera module, the components of the plenoptic camera are arranged in such a way that the thickness of the camera module is less than 10 mm. [0033] A further aspect of the present invention relates to a mobile device, preferably a smartphone, comprising the plenoptic camera or the camera module described above.
In the mobile device, the image sensor of the plenoptic camera can be arranged such that the normal line of the image sensor is parallel to the rear side of the mobile device. In this way, the path of the light rays captured by the camera is bent by the first reflective element (and optionally by additional reflective elements), which makes it possible to reduce the thickness of the mobile device. In the mobile device, the components of the plenoptic camera are preferably arranged such that the thickness of the mobile device is less than 10 mm. [0034] Brief description of the figures [0035] A series of drawings, which help to better understand the invention and which are expressly related to embodiments of said invention, presented as non-limiting examples thereof, are described very briefly below. [0036] Figure 1A represents a schematic side view of a plenoptic camera system with an image sensor, a microlens array and a main lens, according to the prior art. Figure 1B represents, in a front view, the microimages produced by the microlenses on the image sensor. Figure 1C shows the pixels that form a microimage of the image sensor. [0037] Figure 2 illustrates the disparity between two projections of the same object point through two cameras separated from each other by a baseline b. [0038] Figure 3 shows the error in depth calculations versus the actual distance of objects in the object world for different focal lengths in a plenoptic camera. [0039] Figure 4 shows a typical camera module for smartphones. Figure 5A depicts a plenoptic camera according to the prior art, with a pure plenoptic (unfolded) configuration. Figures 5B and 5C show a plenoptic camera according to two different embodiments of the present invention, with a folded optics configuration. [0040] Figures 6A-6D show four different plenoptic camera embodiments in accordance with the present invention.
[0041] Figure 7 shows a schematic example of a plenoptic camera according to the present invention, installed inside a smartphone. [0042] Figures 8A-8D show four other embodiments of plenoptic camera devices with folded optics configurations. [0043] Figure 9 shows an image sensor with its appropriate image circle. Figure 10A shows a 3D view of a plenoptic camera with a folded optics configuration. Figure 10B shows a 3D view of a plenoptic camera with a folded optics configuration where the lenses have been cut to reduce the thickness of the device along the Z axis. [0044] Detailed description [0045] Conventional cameras capture two-dimensional spatial information from the light rays captured by the sensor. Furthermore, color information can also be captured using so-called Bayer pattern sensors or other color sensors. However, no information about the direction of arrival of the rays is recorded by a conventional camera. Plenoptic cameras have the ability to record 3D information about different objects. Basically, a plenoptic camera is equivalent to capturing the scene from various points of view (the so-called plenoptic views, which act as several cameras distributed around the equivalent aperture of the plenoptic camera). [0046] Typically, a plenoptic camera 100 (see Figure 1A) is made by placing a microlens array 104 between the main lens 102 and the image sensor 108. Each of the microlenses 106 (lx, ly) forms a small image, known as a microimage (110a, 110b), of the main aperture on the image sensor 108 (see Figures 1B and 1C), such that each pixel (px, py) of any microimage (110a, 110b) captures light rays 101 coming from a different part of the main aperture. Each of the microimages below any microlens is an image of the main lens aperture, and each pixel at the positions (px1, py1) to (pxn, pyn) in each microlens 106 integrates light from a given part of the aperture (axn, ayn) regardless of the position of the microlens.
The light that passes through the aperture at position (axn, ayn) from different locations in the object world will strike different microlenses but will always be integrated by the pixel (pxn, pyn) below each microlens of the camera. Consequently, the coordinates (px, py) of a pixel within a microimage determine the direction of arrival of the captured rays to a given microlens, and (lx, ly) determine the two-dimensional spatial position. All this information is known as a light field and can be represented by a four-dimensional matrix LF(px, py, lx, ly) or a five-dimensional matrix LF(px, py, lx, ly, c) if the color information (c) is considered. As mentioned above, in some key respects a plenoptic camera behaves like a multistereoscopic camera (since both are multiview systems) with a reduced baseline between views. That is, multistereoscopic systems can also record the light field. The behavior of stereoscopic and multistereoscopic cameras has been extensively studied. Articles such as "Quantization Error in Stereo Imaging" [Rodríguez, J. J., and Aggarwal, J. K. Quantization Error in Stereo Imaging. In Computer Vision and Pattern Recognition, 1988. Proceedings CVPR'88, Computer Society Conference on (pp. 153-158). IEEE] show how long focal lengths improve the depth estimation errors at relatively long distances in multiview systems. [0047] The depth estimation of a stereoscopic camera follows the equation:

z = (b · f) / d,

[0051] where z is the depth of the point of interest, b is the baseline, f the focal length of the cameras (if both cameras have the same focal length), and d the disparity.
The disparity d represents the difference in the position of the two projections (or more projections in the case of a multistereoscopic system) of the same point of the object world in the two (or more) cameras of a stereoscopic (multistereoscopic) system. As an example, Figure 2 shows two cameras separated from each other by a baseline b, and how, when the light from a point P in the object world passes through the two equivalent lenses c1 and c2 of the two cameras and reaches the sensors s1 and s2 of the two cameras at two different sensor positions, the disparity d is the distance between the two images p1 and p2 of the same point P on the two sensors s1 and s2. [0052] From the above equation, the depth estimation error can be calculated as:

Δz = (z² / (b · f)) · Δd,

[0056] where Δz represents the absolute depth error, and Δd represents the absolute disparity error. [0057] A plenoptic camera follows the same equation for the error produced in depth calculations. In this case, the baseline corresponds to the aperture size D of the optical system. [0058]

Δz = (z² / (f · D)) · Δd = (z² · f# / f²) · Δd,

[0060] where f# = f / D (that is, the f-number). [0061] Therefore, the depth error Δz produced in a plenoptic camera can be reduced by increasing the focal length f of the optical system while maintaining the f-number, by reducing the f-number while maintaining the focal length f (that is, by increasing D), or by reducing the f-number while increasing the focal length f. Mobile phone lenses are commonly designed with small f-numbers and small focal lengths (due to the restrictive thickness requirements of the mobile phone industry). Starting from a commercially available design of a smartphone lens, which has a small f-number and a small focal length, Figure 3 shows how the depth estimation error is quadratically reduced with increasing focal length when the f-number is held constant.
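The two equations above can be checked with a short numerical sketch (all values are illustrative, not taken from the patent): at constant f-number, doubling the focal length divides the depth error by four and tripling it divides the error by nine, matching Figure 3:

```python
def stereo_depth(b, f, d):
    """Stereoscopic depth: z = b * f / d."""
    return b * f / d

def plenoptic_depth_error(z, f, f_number, delta_d):
    """Plenoptic depth error: dz = z^2 * f# / f^2 * delta_d (baseline = D = f / f#)."""
    return z ** 2 * f_number / f ** 2 * delta_d

# Hypothetical numbers, all lengths in mm: object at 2 m, f/2.0, 1 um disparity error.
z = 2000.0
err_f1 = plenoptic_depth_error(z, 4.0, 2.0, 0.001)   # f1 = 4 mm
err_f2 = plenoptic_depth_error(z, 8.0, 2.0, 0.001)   # f2 = 2 * f1
err_f3 = plenoptic_depth_error(z, 12.0, 2.0, 0.001)  # f3 = 3 * f1
print(err_f1 / err_f2, err_f1 / err_f3)  # 4.0 and 9.0, up to floating-point rounding
```

The quadratic dependence on f at fixed f# is exactly why the invention insists on long focal lengths.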
The depth error for focal length f1, a small focal length commonly found in the mobile phone industry, is four times greater than the depth error for focal length f2 (f2 = 2·f1), and nine times greater than the error produced by f3 (f3 = 3·f1). [0062] However, increasing the focal length generally means increasing the OTTL (Optical Total Track Length) of an optical system. Although it depends on the particular optical design, the relationship between the focal length and the OTTL approximately follows [0063] the expression 1.1 < OTTL / f < 1.3 in unfolded configurations. [0064] Therefore, increasing the focal length implies an almost proportional increase in the OTTL to keep the f-number constant, and therefore an increase in the MTTL (Mechanical Total Track Length), which makes the camera module (like the camera module 400 for smartphones depicted in Figure 4) thicker (i.e., a large Sz). [0065] Figure 4 shows a schematic diagram of a typical camera module 400 for mobile devices, such as smartphones, in order to be illustrative but never limiting. The important dimensions (Sx x Sy x Sz) have been highlighted. Typical dimensions of camera modules used in the mobile phone industry are as follows: 4 mm < Sz < 6.5 mm; 8 mm < Sy < 10 mm; 8 mm < Sx < 10 mm, where Sx, Sy and Sz correspond to the width, height and thickness of the camera module 400, respectively (according to the X, Y and Z axes of Figure 7). [0066] The most critical dimension is Sz, which coincides with the MTz (Mechanical Track in Z). This size Sz of the camera module 400 has to be less than the thickness Tz of the mobile device, as shown in Figure 7, and mobile phone manufacturers tend to move to smaller thicknesses with each new generation of phones. This means that cameras need to follow these trends if they are to fit into mobile devices. Camera modules with Sz thicknesses greater than 10 mm would be strongly rejected by the market, which targets cameras with an Sz approaching 5 mm and even 4 mm.
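The OTTL rule of thumb above can be checked numerically. This sketch (the focal length value is hypothetical, not from the patent) shows why an unfolded long-focal-length design cannot meet the Sz limits just quoted:

```python
def ottl_bounds(f):
    """Approximate OTTL range for an unfolded design: 1.1 < OTTL / f < 1.3."""
    return 1.1 * f, 1.3 * f

f = 12.0  # a long focal length in mm (illustrative value, not from the patent)
low, high = ottl_bounds(f)
print(low, high)  # roughly 13.2 to 15.6 mm of optical track

# An unfolded module needs Sz >= OTTL, but the market caps Sz at about 6.5 mm
# (and targets 4-5 mm), so the optical path must be folded out of the Z axis.
SZ_MAX = 6.5
print(low > SZ_MAX)  # True: even the optimistic bound exceeds the thickness budget
```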
[0067] Today, smartphone market trends call for a reduced Sz thickness for mini cameras, forcing vendors to design lenses with greatly reduced focal lengths f to meet customer specifications. Miniaturized plenoptic cameras (such as those disclosed in patent document US9647150B2), even though they have never been commercially released in a form factor similar to Figure 4, can have greatly improved performance if the focal length f is increased to values not commonly seen in conventional imaging lenses in the mini camera industry. Therefore, it is imperative to increase the focal length of a specific plenoptic system without violating the design rules of the smartphone market (which require very small thicknesses), in order to improve the precision of the depth errors and bring the mini plenoptic camera to the highest level of depth/3D cameras for portable devices. [0068] A first approach to increasing the focal length f is to scale all the components of the optical system, increasing all dimensions while maintaining the f-number. This implies changing the main lenses, the microlenses and the sensor itself, in such a way that it also forces an increase in the OTTL and MTTL dimensions, safely exceeding the requirements of the smartphone market in terms of small thicknesses (Sz). [0069] A second approach to increasing the focal length f could be to scale the main lens but keep the size of the sensor and microlenses. The focal length f of the plenoptic camera would increase but, because the microlenses and sensor size are kept the same, the FOV (field of view) would be reduced, due to the fact that the sensor is no longer capturing the entire FOV of the optical system, but only a subset. And, what is worse, in this case the OTTL and MTTL would also be increased, leading to an increase in the length of the main lens and making it difficult to use in mobile phone applications.
[0070] These approaches to increasing the focal length f improve the error in the depth calculations (they move the design point towards the lower curves in Figure 3), making the camera more accurate, with lower error percentages for the estimated depths of objects located farther from the camera (i.e., at longer distances in Figure 3). However, the resulting OTTL and MTTL increase and do not conform to the restrictive thickness specifications of current mini smartphone cameras or, in other words, a module like the one in Figure 4 would have too large an Sz thickness to fit within a mobile phone. [0071] In this context, in the present invention a prism or mirror is used to bend the optical path of the light, increasing the OTTL without increasing the thickness Sz of the camera module 400. Therefore, a novel plenoptic device with folded optics configurations is presented here. [0072] Figures 5A-5C show various embodiments of a plenoptic camera, showing the benefits of folded devices in terms of thickness. In all of these embodiments, the main lens 102 is composed of a single lens element, or a pair or group of cemented lens elements. In these figures, the term OT refers to the optical track length and MT refers to the mechanical track length. The length of the mechanical track along the Z axis (MTz) represented in Figure 7 is the critical dimension to consider when fitting the camera into a mobile phone, because it corresponds to the thickness Tz of the device (or, in other words, to keeping the thickness Sz as small as possible in the camera module 400 of Figure 4). The three embodiments of Figures 5A-5C have the same optical performance in terms of focal length f and f-number, but a different MTz. [0073] Figure 5A depicts a typical plenoptic camera 500a according to the prior art. The configuration of this plenoptic camera 500a is designed with a small f-number and a large focal length f in order to obtain good depth error accuracy.
The optical axis 502 of the main lens 102 is perpendicular to the image sensor 108, traversing the center of the image sensor 108 (that is, the normal line 504 of the image sensor 108 at its center point is coincident with the optical axis 502). However, this configuration has a large OTTLa = OTza, which implies a large MTTLa = MTza that does not fit within the typical dimensions of a smartphone. [0074] Figure 5B shows a plenoptic camera 500b in accordance with one embodiment of the present invention. The plenoptic camera 500b shown in Figure 5B uses folded optics to reduce the MTz while maintaining the same focal length (the OTTL and the f-number remain the same as in Figure 5A). In this configuration, the optical path is bent using a reflective surface of a first reflective element 510, such as a prism or a mirror; therefore OTTLb has two components, OTzb and OTxb, but OTTLb is the same as that used in Figure 5A (OTTLb = OTTLa = OTza = OTzb + OTxb). In the configuration shown in Figure 5B, the optical axis 502 of the main lens 102 is parallel to the image sensor 108 (that is, the optical axis 502 and the normal line 504 of the image sensor are perpendicular). [0075] However, unlike the previous configuration, the MTz thickness of the camera module has been reduced enough to fit within the low thickness requirements of the mini camera specifications, while retaining the benefits of large focal lengths for plenoptic camera systems. Or, in other words, the plenoptic cameras 500a and 500b in Figures 5A and 5B offer the same optical performance and the same f-number; however, the thickness of the plenoptic camera 500a in Figure 5A is greater than the thickness of the plenoptic camera 500b in Figure 5B (MTza > MTzb) or, if implemented in a module as in Figure 4, the thickness Sz would be smaller for the embodiment shown in Figure 5B. [0076] Figure 5C depicts a plenoptic camera 500c in accordance with another embodiment of the present invention.
This plenoptic camera 500c has a configuration in which two reflective elements, a first reflective element 510 and a second reflective element 512, have been introduced to bend the optical path. The second reflective element 512 (such as a prism or a mirror) reflects light rays that have already been reflected by the first reflective element 510. Additional reflective elements can be used (e.g., a third reflective element, a fourth reflective element, etc.) to further reflect the light rays reflected by the previous reflective elements located along the optical path. The OTTL in Figure 5C has three components, OTz1c, OTxc and OTz2c, whose sum coincides with OTTLa (OTTLc = OTTLa = OTz1c + OTxc + OTz2c) in Figure 5A, such that the focal length remains constant and the MTz has been drastically reduced (MTzc < MTzb < MTza). In the configuration shown in Figure 5C, the optical axis 502 of the main lens 102 and the normal line of the image sensor 108 at its center point are parallel but not coincident (that is, they are located at different heights), because the optical path has been folded twice along the way. [0077] Figures 6A-6D show various embodiments of plenoptic camera devices (600a, 600b, 600c, 600d) with a folded optics configuration, in order to be illustrative but never limiting, where the main lens 102 is composed of a plurality of lens groups or uncemented lens elements. The plenoptic camera devices shown in these figures are made up of an image sensor 108, a microlens array 104, an infrared filter 612 (an optional element which may not be present) and a main lens 102 comprised of four or five lens elements, although it could comprise fewer or more lens elements. [0078] Each configuration shows a different MTz (the mechanical track length along the Z axis, corresponding to the thickness Tz of the mobile device), as represented in Figure 7.
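The track-length bookkeeping of Figures 5A-5C can be sketched with hypothetical numbers (none taken from the patent): the folded components always sum to the same OTTL, while the Z-axis mechanical track shrinks with each fold:

```python
OTTL = 15.0  # total optical track length, identical in all three configurations (mm)

# Fig. 5A, unfolded: the whole track lies along Z.
MTz_a = OTTL

# Fig. 5B, one fold: OTTL = OTz_b + OTx_b; only the Z segment sets the thickness.
OTz_b, OTx_b = 4.5, 10.5
assert OTz_b + OTx_b == OTTL
MTz_b = OTz_b

# Fig. 5C, two folds: OTTL = OTz1_c + OTx_c + OTz2_c; the Z-track is the larger
# of the two short Z segments.
OTz1_c, OTx_c, OTz2_c = 3.0, 9.0, 3.0
assert OTz1_c + OTx_c + OTz2_c == OTTL
MTz_c = max(OTz1_c, OTz2_c)

print(MTz_c < MTz_b < MTz_a)  # True: MTzc < MTzb < MTza, as stated above
```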
Each figure represents the X, Y and Z axes corresponding to those shown in Figure 7, according to the installation of the plenoptic camera in the mobile device (in Figures 6A-6C, the image sensor 108 extends along the Z axis, while, in the embodiment of Figure 6D, the image sensor 108 extends along the X axis). In all cases, the introduction of a first reflective element 510 (preferably a prism or mirror) that folds the light path reduces the MTz with respect to the original unfolded configuration. As can be seen in Figures 6A-6D, in all cases MTz < OTTL and, of course, MTz < MTTL (considering the original unfolded configuration to calculate the MTTL). [0079] In the first configuration, shown in Figure 6A, the first reflective element 510, such as a prism or mirror positioned at 45 degrees with respect to the optical axis, reflects the light rays 601a captured by the plenoptic camera 600a just before they pass through any optical surface, that is, before they reach any of the lens elements (620, 622, 624, 626, 628) of the main lens 102. In the example of Figure 6A, the light rays 601b reflected by the first reflective element 510 (and which form a certain angle with respect to the captured light rays 601a) reach the main lens 102. It is very clear from Figure 6A that MTza < OTTLa, which in practical terms means that the thickness Sz of the camera module 400 (Figure 4) is smaller and easier to fit within the strict requirements of a mobile phone. [0080] In the second configuration, shown in Figure 6B, the main lens 102 comprises a first set (630, 632) and a second set (634, 636) of lens elements.
The plenoptic camera 600b of Figure 6B bends the captured light rays 601a after they have passed through the first set of lens elements (the first two lenses 630 and 632) of the main lens 102 (in this case, an achromatic doublet) with the aid of a first reflective element 510, a prism or mirror, positioned at 45 degrees with respect to the optical axes of both sets of lens elements. In this case, MTzb = MTza and, in both cases, the thickness is limited by the sensor chip dimensions (Dx in Figures 6A-6C). However, for packaging and/or optical design reasons, it might be better to bend the light before or after it passes through various optical surfaces. [0081] The third configuration (Figure 6C) shows a main lens 102 made up of five lens elements divided into a first set (640, 642, 644) and a second set (646, 648) of lens elements. The captured light rays 601a are reflected after passing through the first set of lens elements (the first three lens elements 640, 642, 644), obtaining the reflected light rays 601b striking the second set (646, 648) of lens elements and the image sensor 108. Again, MTzc < OTTLc = OTzc + OTxc. [0082] Figure 6D shows a fourth configuration in which, in addition to the first reflective element 510, a second reflective element 512 (e.g., a prism or mirror) is used to reduce the thickness MTz (MTzd < OTTLd = OTxd + OTzd). In this case, the sensor extends along the X dimension and therefore its chip dimension does not limit the MTz. In this embodiment, the main lens 102 is made up of four lens elements divided into a first set (650, 652) and a second set (654, 656) of lens elements. The first reflective element 510 is arranged to receive the captured light rays 601a before they reach the main lens 102, to obtain reflected light rays 601b.
The second reflective element 512 is arranged between both sets of lens elements and reflects the reflected light rays 601b to obtain additional reflected light rays 601c that impinge on the second set (654, 656) of lens elements and the image sensor 108. [0083] As explained with Figures 5A-5C and 6A-6D above, folded optics make it possible to reduce the thickness (MTz, or Sz in Figure 4 and Tz in Figure 7) of cameras with large focal lengths that would commonly lead to large Sz dimensions (large focal lengths can be fitted into really slim modules with a low MTz or Sz, as shown in Figure 6D, for example). As already mentioned, in all the cases of Figures 5B-5C and 6A-6D, the thickness of the camera is drastically reduced from its original thickness (the MTTL in the equivalent unfolded configuration), allowing large cameras to fit into portable devices that, were it not for the use of folded optics technology, would never be able to meet the specifications of the smartphone industry in terms of thickness. [0084] The reduced-thickness plenoptic camera proposed by the present invention is suitable to be installed in any mobile device with strict thickness restrictions, such as a tablet, PDA or smartphone. Figure 7 shows an example of a plenoptic camera embedded in a smartphone 700, having a configuration similar to those depicted in the embodiments of Figures 6B and 6C. As shown in Figure 7, the plenoptic camera is preferably installed on the back or rear side 710 of the smartphone 700, capturing images from behind the screen. Alternatively, the plenoptic camera can be installed on the front side of the smartphone 700, next to the screen, to capture frontal images. The smartphone 700 has the following dimensions along the X, Y and Z axes depicted in Figure 7: a width Tx, a length Ty and a thickness Tz, respectively. This example is intended to be illustrative but not limiting.
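The 90-degree fold performed by a 45-degree prism or mirror can be sketched with elementary vector reflection (a generic optics identity, not a formula from the patent): a ray entering through the back cover along -Z leaves the reflective surface travelling along +X, toward the sensor:

```python
import math

def reflect(d, n):
    """Reflect direction d off a surface with unit normal n: r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

s = 1.0 / math.sqrt(2.0)
mirror_normal = (s, 0.0, s)   # surface tilted 45 degrees between the X and Z axes
incoming = (0.0, 0.0, -1.0)   # ray entering through the back cover, along -Z

outgoing = reflect(incoming, mirror_normal)
print(outgoing)  # ~(1.0, 0.0, 0.0): the ray now runs along +X, parallel to the back cover
```

This is why, after the fold, the long optical track can run parallel to the XY plane instead of consuming the thickness Tz.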
In this example, the main lens 102 of the plenoptic camera is made up of five lens elements (730, 732, 734, 736, 738), and a first reflective element 510 (a prism or mirror) reflects the light after it has passed through a first set of lens elements (the first two lens elements 730 and 732) of the main lens 102, as in the embodiment of Figure 6B. The other three lens elements (734, 736, 738), which make up the second set of lens elements, the microlens array 104 and the image sensor 108 are distributed along the X axis, without contributing to the thickness MTz of the camera (the critical size Sz in Figure 4). Alternatively, these elements could be distributed along the Y axis, or in any arrangement such that the normal line 504 of the image sensor 108 and the optical axis of the second set of lens elements are parallel to the XY plane. [0085] In conclusion, the proposed folded optics technique makes it possible to obtain, at the same time, superior plenoptic performance (with long focal lengths and small f-numbers) and a small MTz (or thickness Sz of the camera module 400), making it ideal for integration into portable devices such as smartphones, tablets, laptops, etc. [0086] Figures 8A-8D show four more embodiments of plenoptic cameras (800a, 800b, 800c, 800d) with folded optics configurations, in which fully folded plenoptic designs (including the first reflective element 510) are described; they are intended to be illustrative but never limiting. [0087] In these four embodiments, a first prismatic lens 802 is used as the first reflective element 510 (and, optionally, an additional (second, third, ...) prismatic lens 812 is used as the second (third, fourth, ...) reflective element 512). In these embodiments, the prismatic lens 802 (and optionally any additional second, third, etc., prismatic lens 812 used) is basically a three-surface optical element in which the middle surface is reflective (e.g.
, a body or a prism in which all three surfaces can be aspheres rather than flat surfaces) and is made of glass or plastic. Two of the surfaces of the prismatic lens 802/812 (a first surface 804a/814a and a third surface 804c/814c) are refractive surfaces, and the second, middle surface 804b/814b is a reflective surface. Therefore, the prismatic lens 802/812 is an optical element that integrates a lens element of the main lens (formed by the two refractive surfaces 804a/814a and 804c/814c) together with the reflective element 510/512 (formed by the reflective surface 804b/814b) that folds the light path. The light rays that pass through the first surface 804a/814a of the prismatic lens 802/812 have a different optical axis (usually perpendicular) from the rays that pass through the third surface 804c/814c, due to the reflection produced at the second surface 804b/814b of the prismatic lens 802/812. The two refractive surfaces (804a/814a, 804c/814c) can have convex, concave or aspherical shapes, and the reflective surface can be flat, convex/concave, spherical or aspherical. [0088] The use of prismatic lenses allows the light path to be folded, achieving long optical total track lengths (OTTL), and therefore long effective focal lengths (EFFL), within small thicknesses. Also, integrating the prismatic lens 802/812 along with the other lens elements of the optical system is easier than using, for example, a single mirror, where alignment tasks are considerably more difficult. Having a prismatic lens 802/812 with its two well-defined refractive surfaces (804a/814a, 804c/814c), and therefore well-defined optical axes, facilitates the alignment processes. [0089] Various prismatic lens options and implementations have been integrated into the different embodiments of Figures 8A-8D and are detailed below.
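The folding action of the reflective middle surface 804b/814b can be illustrated with the standard vector reflection formula. The sketch below is a geometric check only, not part of the patented design; the function name and the choice of a flat 45-degree mirror normal are our illustrative assumptions:

```python
import numpy as np

def reflect(d, n):
    """Reflect a direction vector d off a surface with unit normal n,
    using the standard mirror formula d' = d - 2 (d . n) n."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling along the first optical axis (+Z) meets the middle
# surface tilted 45 degrees; its normal bisects the -X and +Z directions.
incoming = np.array([0.0, 0.0, 1.0])
normal_45 = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)
folded = reflect(incoming, normal_45)
print(folded)  # -> approximately [1, 0, 0]: the path is folded 90 degrees onto the X axis
```

With a different normal (i.e., a mirror tilted away from 45 degrees), the same formula yields the non-perpendicular folds mentioned later in the text.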
[0090] Figure 8A shows a main lens containing five lens elements (802, 822, 824, 826, 828), an optional infrared filter 612, a microlens array 104, and an image sensor 108. The first lens element is a prismatic lens 802 that integrates, within a single entity, a convex lens (the first surface 804a and the third surface 804c) and the first reflective element 510 (the second surface 804b). The first surface 804a has a convex shape (but it could also be concave or flat); the second surface 804b is a flat surface (but it could be any other non-flat surface) at 45 degrees (but other angles are possible) relative to the optical axis, and this flat surface reflects the light towards the third surface 804c, a second convex surface (but it could be concave or flat). The first convex surface 804a of the prismatic lens 802 refracts the light rays 801a captured by the plenoptic camera 800a; these rays are then reflected by the first reflective element 510 (the flat surface 804b) of the prismatic lens 802, turning the optical axis a total of 90 degrees (but this could be different if the first reflective element 510 is not tilted 45 degrees from the optical axis). The first optical axis extends along the Z axis and, after the reflective element 510, the light follows a second optical axis (the X axis), perpendicular to the first, reaching the third surface 804c of the prismatic lens 802. The light then passes through the other lens elements (822, 824, 826, 828), reaching the microlens array 104 and finally the image sensor 108.
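The relation between mirror tilt and fold angle used above (45 degrees of tilt giving a 90-degree fold) follows from plane-mirror geometry. A minimal sketch, with a function name of our own choosing:

```python
def fold_angle_deg(tilt_deg):
    """Angle by which a flat mirror folds the optical path, given its
    tilt (degrees) measured from the incoming optical axis.
    A ray meeting a surface tilted t degrees has an angle of incidence
    of 90 - t, so the total deviation is 180 - 2*(90 - t) = 2*t degrees."""
    return 2.0 * tilt_deg

print(fold_angle_deg(45.0))  # -> 90.0: the right-angle fold of Figure 8A
print(fold_angle_deg(40.0))  # -> 80.0: a non-perpendicular fold, also allowed by the text
```

This is why the preferred incidence range of [22.5, 67.5] degrees quoted later corresponds to folds between 45 and 135 degrees.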
[0091] The embodiment of Figure 8A has a total optical track length OTTLa (OTza1 + OTxa1) of 11.2 mm (but this could be longer or shorter); the use of the prismatic lens 802 to fold the light path allows most of the camera's OTTL to extend along the X axis, which leads to a thickness MTza of about 5.1 mm (but this could be even shorter, or longer), making this lens with a very large OTTL suitable for integration into a mobile phone thanks to its reduced thickness. In this case, the thickness (Z axis) is limited by the size of the sensor. Note that the extreme field rays represented in Figures 8A-8D refer to the field on the diagonal of the sensor (although they are drawn in the XZ plane), leading to thicknesses MTz (MTza, MTzb, MTzc, MTzd) that refer to the diagonal of the image sensor 108. This is common practice in optical design, because the lenses typically have rotational symmetry but the sensor is rectangular. This is why the last lens diameter coincides with the height of the last ray that passes through that last surface. This folded design allows the focal length to be extended up to 9.3 mm (in this example, but it could be longer), dramatically improving depth-sensing performance while still meeting the very small thickness requirements of mobile phones (MTza of 5.1 mm). [0092] The example of the lens in Figure 8A should not be construed as a limiting option, but only as an illustrative example of how a folded design can achieve a large focal length and excellent depth performance with a small thickness. The main lens 102 of the camera may have more or fewer lens elements in order to improve optical performance (in this example, five lens elements, but it could be fewer or more lenses), and the non-reflective surfaces of the prismatic lens 802 can be formed of convex surfaces, flat surfaces, concave surfaces, or any aspherical surface that the designer may deem appropriate.
Also, the reflective element 510 (which is, in this case, a flat surface inclined 45 degrees with respect to the optical axis) could be a convex or concave reflective surface (or a flat surface inclined at any other angle to the optical axis). The prismatic lens 802 can be positioned as the first lens element, or as a rear lens element, after one or more lens elements (e.g., as the second lens element). [0093] Figure 8B shows an embodiment in which two prismatic lenses (a first prismatic lens 802 and a second prismatic lens 812) have been used to fold the light path twice. In this case, the size of the image sensor 108 does not limit the thickness of the lens, because the image sensor 108 extends along the X and Y dimensions (it is no longer a limiting factor in the Z dimension: the sensor rectangle extends along the XY dimensions of Figure 7, and the rectangular sensor chip is no longer required to be smaller than Tz in Figure 7, as is imposed by the embodiment in Figure 8A). The main lens 102 is also made up of five elements (802, 832, 834, 836, 812), plus an optional infrared filter 612, the microlens array 104, and the image sensor 108. The first four lens elements (802, 832, 834, 836) are similar to those of the embodiment described in Figure 8A. The fifth lens element is a second prismatic lens 812 with two refractive surfaces (the first 814a and third 814c surfaces) that have aspherical shapes, and a second, reflective surface (in the example, a flat surface at 45 degrees) that acts as the second reflective element 512. In this case, the total optical track length of the lens, OTTL (OTzb1 + OTxb1 + OTzb2), is about 12.9 mm (but this could be longer or shorter). The use of folded optics makes it possible to achieve a thickness MTzb of only a few millimeters (approximately 5.0 mm in this embodiment).
Furthermore, the use of a second reflective element 512 allows the effective focal length EFFL to be further increased (up to 13.2 mm in the example, versus 9.3 mm in the embodiment of Figure 8A), dramatically improving the depth detection performance of the plenoptic camera. [0094] Again, the embodiment of Figure 8B should not be construed as limiting, but only as an example. The lens can be made up of fewer or more elements than the five elements in the example, and of fewer or more than two reflective elements (510, 512), which can be either prisms or mirrors, or a combination of prisms and mirrors. The reflective element 510 and any additional reflective element 512 can reflect the incoming light rays at a 90-degree angle or any other arbitrary angle. The reflective elements can be arranged (i.e., tilted) such that the angle of incidence (and therefore the corresponding angle of reflection) of the incident light rays can be any angle within the range (0, 90) degrees, and preferably within the range [22.5, 67.5] degrees. In some embodiments, the angle of reflection is preferably 45°, thereby obtaining an optical path that folds 90 degrees. [0095] Figure 8C shows an embodiment in which two prismatic lenses (802, 812) have been integrated into a four-element lens, such that two of the lens elements (802, 842, 812, 844) of the main lens 102 are prismatic lenses (802, 812). The camera 800c also comprises an optional infrared filter 612, a microlens array 104, and an image sensor 108. The first prismatic lens 802 is similar to that of Figure 8A. However, the second prismatic lens 812 forms a concave-planar lens with its first 814a and third 814c surfaces, and its second, reflective surface 814b is a flat surface at 45 degrees to the optical axis. The second prismatic lens 812 is located between two regular aspherical lenses (842, 844).
In this case, the main lens has an optical total track length OTTL (OTzc1 + OTxc1 + OTzc2) of 12.0 mm with an effective focal length EFFL of 10.4 mm, and the thickness MTzc is 5.7 mm. Here the thickness MTz is limited by the size of the prismatic lenses (802, 812) and the thickness of the last regular lens element 844. If the priority is to reduce the thickness MTz as much as possible, the best solution is clearly to use prismatic lenses as the first and/or last lens element. [0096] Figure 8D shows another embodiment of a plenoptic camera with a folded optics configuration, where two prismatic lenses (802, 812) have been used in a main lens 102 composed of five lens elements (802, 852, 854, 856, 812). In this case, the first prismatic lens 802 is similar to that of Figure 8A; however, a small concavity has been inserted into the reflective element 510 (so small that it cannot be seen in the schematic diagram of Figure 8D). The second prismatic lens 812 integrates an aspherical lens and a concave reflective surface 814b (rather than flat, as in the embodiments of Figures 8A through 8C). The inclusion of non-planar reflective surfaces (804b, 814b) complicates the design but has manufacturing advantages. The main lens has an optical total track length OTTL (OTzd1 + OTxd1 + OTzd2) of 14 mm, with an effective focal length EFFL of 12.4 mm. In this embodiment, the thickness MTzd of the lens is 6.2 mm. [0097] The use of one or more prismatic lenses (802, 812) makes it possible to achieve a large optical total track length OTTL with small thicknesses MTz. In all cases, the thickness MTz is less than 6.5 mm, so the camera can be integrated into a modern mobile phone, where in usual practice the thickness never exceeds 7.5 mm for the rear cameras and 5 mm for the front cameras.
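The four embodiments of Figures 8A-8D can be compared side by side using the OTTL, EFFL and MTz values stated in the text. This is a bookkeeping sketch of those quoted figures, not additional design data:

```python
# OTTL, EFFL and MTz values for the embodiments of Figures 8A-8D,
# as stated in the text (all in mm).
embodiments = {
    "8A": {"OTTL": 11.2, "EFFL": 9.3,  "MTz": 5.1},
    "8B": {"OTTL": 12.9, "EFFL": 13.2, "MTz": 5.0},
    "8C": {"OTTL": 12.0, "EFFL": 10.4, "MTz": 5.7},
    "8D": {"OTTL": 14.0, "EFFL": 12.4, "MTz": 6.2},
}

REAR_CAMERA_LIMIT_MM = 7.5  # usual thickness ceiling for rear cameras, per [0097]

for name, e in embodiments.items():
    # Ratio of track length to thickness: how much OTTL the fold "hides".
    ratio = e["OTTL"] / e["MTz"]
    assert e["MTz"] < 6.5, "every folded design in [0097] stays under 6.5 mm"
    print(f"Fig {name}: OTTL {e['OTTL']:.1f} mm folded into MTz {e['MTz']:.1f} mm "
          f"(x{ratio:.1f}); fits rear-camera limit: {e['MTz'] < REAR_CAMERA_LIMIT_MM}")
```

The comparison makes the trade-off visible: Figure 8B achieves the longest EFFL of the first two designs at the smallest thickness, while Figure 8D trades 1.2 mm of extra thickness for the largest OTTL.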
[0098] In addition to the prismatic lens technique, or any other folded optics technique as described above to reduce the thickness MTz of the device, other strategies could also be used to reduce the thickness of the device. As explained above, lenses commonly have rotational symmetry, while image sensors 108 are not circular (they are rectangular). This means that the lens has to be optimized to exhibit good optical performance along the entire diagonal of the image sensor 108 to ensure good performance across the entire sensor, but part of the optimized field is wasted due to the shape of the image sensor 108 (light photons that fall within the dotted circle in Figure 9 but outside the rectangle of the sensor's active area are not used; these photons are never converted into electrons). Figure 9 shows an image sensor 108 with side sizes ISz x ISx and its corresponding image circle. The diameter of the image circle is set by the diagonal of the sensor's active area; however, because the sensor is rectangular, a non-negligible part of the optimized field is wasted, and therefore a non-negligible part of the lens area is not being used for useful light that reaches the active area of the sensor. [0099] Figure 10A shows a plenoptic camera 1000a where, as in any normal camera, the rotational symmetry of the lenses produces an image circle that falls on the microlens array 104 and finally on the image sensor 108. This circle in Figure 10A is like the circle in Figure 9; but in fact the image sensor 108 and the microlens array 104 are rectangles, as in Figures 9 and 10B, and the light falling within the circle but outside the rectangle of the image sensor 108 plays no useful role.
Figure 10A shows a folded plenoptic camera with five lens elements (1002, 1004, 1006, 1008, 1010), in which four different rays are represented reaching the four corners of the image sensor 108, limiting the field of view (FOV) of the plenoptic camera 1000a (the light at the very center of the FOV, which reaches the center of the image sensor 108, is also depicted in Figure 10A). [0100] The light reaching the surface of the microlens array 104 in Figure 10A outside the rectangle formed by the four dots limiting the FOV is of no use. In Figure 10B, only the active area of the image sensor 108 (where the photons are converted into electrons) and the microlens array 104 are shown. Lenses 1008 and 1010 of Figure 10A are also truncated in Figure 10B (truncated lenses 1018 and 1020), eliminating the part of those lenses that would transmit light inside the circle but outside the rectangle of Figure 9 (that is, outside the active area of the sensor). [0101] The net result is that plenoptic cameras 1000a and 1000b are functionally identical but, in camera 1000a, the MTTL (the thickness MTza) is set by the outer circle of lenses 1008 and 1010 (or by the outer circle of Figure 9) while, in camera 1000b, the MTTL (the thickness MTzb) is set by lens 1012 (exactly the same as lens 1002 in camera 1000a, and larger in the z dimension than the truncated lenses 1018 and 1020). [0102] In summary, and as shown in the embodiments of Figures 8A-8D, there are a large number of degrees of freedom in the design of the main lens. The refractive surfaces (804a, 804c; 814a, 814c) of the prismatic lens can be concave, convex, flat, aspherical, or any combination thereof. The reflective surface (804b; 814b) can be flat or convex/concave, and it can have any degree of inclination (not necessarily 45 degrees off the optical axis, as shown in most figures).
Prismatic lenses can be positioned as the first lens element of the main lens, as the last lens element of the main lens, or between regular lens elements of the main lens, depending on the particular needs of the design. The number of prismatic lenses can also vary (one, two or more prismatic lenses can be used). In addition, the lenses can be cut to reduce the thickness. [0103] The embodiments shown in the figures of this document are only examples that should not be construed as limiting features; the scope of the invention should be derived only from the claims, because there is an unlimited number of possible embodiments that cannot all be covered with examples but become apparent to one skilled in the art after having had access to the present invention. For example, in all the changes in the direction of propagation of light at the reflective surfaces in Figures 5B, 5C, 6A, 6B, 6C, 6D, 7, 8A, 8B, 8C, 8D, 10A and 10B, the incident rays and the reflected ones are perpendicular; however, a practical design with different incident and reflected angles might be suitable for some applications. For example, in some silicon sensors the active photosensitive area is not perfectly centered within the silicon wafer area and, for example, in the plenoptic camera 1000b it might be convenient to move the silicon sensor 108 slightly to the right or to the left; that could be done by constructing the reflective surface of lens 1012 at angles slightly higher or lower than 45 degrees relative to the optical axis of the first surface of lens 1012 (obviously, in this case, the sensor would not be perfectly parallel or perpendicular to the outer structure of the mobile phone, but a miniaturization problem would be solved).
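The wasted-field argument of paragraphs [0098]-[0101] can be quantified: since the image circle's diameter equals the sensor diagonal (Figure 9), the fraction of the circle actually covered by the rectangular active area follows from simple geometry. A hedged sketch with illustrative sensor dimensions of our own choosing:

```python
import math

def image_circle_fill_fraction(isx_mm, isz_mm):
    """Fraction of the image circle covered by a rectangular sensor of
    sides ISx x ISz, where the circle diameter equals the sensor
    diagonal (as in Figure 9)."""
    diagonal = math.hypot(isx_mm, isz_mm)
    circle_area = math.pi * (diagonal / 2.0) ** 2
    return (isx_mm * isz_mm) / circle_area

# Hypothetical 4:3 sensor, 4.0 mm x 3.0 mm (illustrative numbers only).
frac = image_circle_fill_fraction(4.0, 3.0)
print(f"{frac:.1%} of the image circle is used; "
      f"{1 - frac:.1%} of the optimized field is wasted")
```

For any 4:3 sensor the fraction is 48/(25*pi), roughly 61%, so nearly 39% of the optimized field is wasted, which is why truncating lenses 1008 and 1010 (Figure 10B) costs no useful light.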
Claims (27) [1] 1. A plenoptic camera comprising a main lens (102), an array of microlenses (104) and an image sensor (108), characterized in that the plenoptic camera also comprises a first reflective element (510) configured to reflect the light rays (601a) captured by the plenoptic camera before reaching the image sensor (108). [2] 2. The plenoptic camera of claim 1, comprising a three-sided optical element (802) that has two refractive surfaces (804a, 804c) that form a lens element of the main lens (102) and a reflective surface (804b) that forms the first reflective element (510). [3] 3. The plenoptic camera of claim 2, characterized in that the three-sided optical element (802) is made of glass or plastic. [4] 4. The plenoptic camera of any of claims 2 to 3, characterized in that the refractive surfaces (804a, 804c) of the three-sided optical element (802) are flat surfaces, convex surfaces, concave surfaces, aspherical surfaces or a combination thereof. [5] 5. The plenoptic camera of any of claims 2 to 4, characterized in that the reflective surface (804b) of the three-sided optical element (802) is a flat surface, a convex surface or a concave surface. [6] 6. The plenoptic camera of claim 1, characterized in that the first reflective element (510) is a prism. [7] 7. The plenoptic camera of claim 1, characterized in that the first reflective element (510) is a mirror. [8] 8. The plenoptic camera of any of claims 1 to 7, characterized in that the first reflective element (510) is arranged to receive the captured light rays (601a) before they reach the main lens (102). [9] [10] 10. [11] 11. [12] 12. [13] 13. [14] 14. The plenoptic camera of any of claims 12 to 13, characterized in that the refractive surfaces (814a, 814c) of the optical element (812) are flat surfaces, convex surfaces, concave surfaces, aspherical surfaces, or a combination thereof. [15] 15. [16] 16. [17] 17. [18] 18. [19] 19. [20] 20. [21] 21. 
[22] 22. [23] 23. [24] 24. [25] 25. [26] 26. [27] 27. The mobile device of any of claims 24 to 26, characterized in that the mobile device is a smartphone (700).
Family patents: GB202014312D0 (2020-10-28); JP2021529334A (2021-10-28); CN112005150A (2020-11-27); WO2019174758A1 (2019-09-19); WO2019174756A1 (2019-09-19); DE112018007283T5 (2020-12-31); KR20200131836A (2020-11-24); GB2585782A (2021-01-20); US20210366968A1 (2021-11-25)
Priority: ES201800082 (filed 2018-03-15); PCT/EP2018/065883 (filed 2018-06-14), published as WO2019174758A1, "Plenoptic camera for mobile devices"
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|